| Name | Version | Summary | Date |
| --- | --- | --- | --- |
| agentic_security | 0.3.4 | Agentic LLM vulnerability scanner | 2024-12-15 17:24:41 |
| agentdojo | 0.1.22 | A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents | 2024-11-19 16:52:29 |
| llama-index-packs-zenguard | 0.3.0 | llama-index packs zenguard integration | 2024-11-17 22:43:21 |
| llama-index-packs-llama-guard-moderator | 0.3.0 | llama-index packs llama_guard_moderator integration | 2024-11-17 22:42:41 |
| prompt-protect | 0.1 | An NLP classifier for detecting prompt injection | 2024-09-02 22:57:55 |
| llm-guard | 0.3.15 | LLM-Guard is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks, LLM-Guard ensures that your interactions with LLMs remain safe and secure. | 2024-08-22 19:39:48 |
| langalf | 0.0.4 | Agentic LLM vulnerability scanner | 2024-04-15 12:40:16 |
| last_layer | 0.1.32 | Ultra-fast, low-latency LLM security solution | 2024-04-05 12:38:46 |